A new development has caused quite a stir in the world of technology. Former and current employees of OpenAI and Google DeepMind, two giants of artificial intelligence, have issued a warning in an open letter that reveals striking details about the inner workings of these companies. Signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee, the letter conveys, through the eyes of the employees, the potential risks of artificial intelligence technologies and serious concerns about how those risks are managed. Here are the details…
Is artificial intelligence getting out of control?
The open letter emphasizes that AI technologies could further deepen existing inequalities and fuel the spread of manipulation and misinformation, and that a loss of control over autonomous AI systems could pose an existential threat to humanity.
The employees state that although the companies acknowledge these risks, current corporate governance structures are insufficient to address them. They also point out that AI companies have strong financial incentives to avoid sharing information about the level of risk their systems pose and to evade outside oversight.
The employees add that, in the absence of effective government oversight, broad confidentiality agreements prevent them from speaking out about these risks:
“A Right to Warn About Advanced Artificial Intelligence
As people who work or have worked at advanced AI companies, we believe in the potential of AI technology to bring unprecedented benefits to humanity.
At the same time, we recognize the serious risks these technologies pose. These risks range from the exacerbation of existing inequalities, to manipulation and the spread of misinformation, to the loss of control of autonomous AI systems, which could result in human extinction.
AI companies themselves have acknowledged these risks [OpenAI, Anthropic, Google DeepMind], as have governments around the world [the US, the UK, and the 28 countries that signed the Bletchley Declaration] and other AI experts.”